user trust
Critical or Compliant? The Double-Edged Sword of Reasoning in Chain-of-Thought Explanations
Park, Eunkyu, Deng, Wesley Hanwen, Varadarajan, Vasudha, Yan, Mingxi, Kim, Gunhee, Sap, Maarten, Eslami, Motahhare
Explanations are often promoted as tools for transparency, but they can also foster confirmation bias; users may assume reasoning is correct whenever outputs appear acceptable. We study this double-edged role of Chain-of-Thought (CoT) explanations in multimodal moral scenarios by systematically perturbing reasoning chains and manipulating delivery tones. Specifically, we analyze reasoning errors in vision-language models (VLMs) and how they affect user trust and users' ability to detect errors. Our findings reveal two key effects: (1) users often equate trust with outcome agreement, sustaining reliance even when reasoning is flawed, and (2) a confident tone suppresses error detection while maintaining reliance, showing that delivery styles can override correctness. These results highlight how CoT explanations can simultaneously clarify and mislead, underscoring the need for NLP systems to provide explanations that encourage scrutiny and critical thinking rather than blind trust. All code will be released publicly.
- Asia > South Korea > Seoul > Seoul (0.40)
- North America > United States (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
Trust in Vision-Language Models: Insights from a Participatory User Workshop
Chiatti, Agnese, Piccolo, Lara, Bernardini, Sara, Matteucci, Matteo, Schiaffonati, Viola
With the growing deployment of Vision-Language Models (VLMs), pre-trained on large image-text and video-text datasets, it is critical to equip users with the tools to discern when to trust these systems. However, examining how user trust in VLMs builds and evolves remains an open problem. This problem is exacerbated by the increasing reliance on AI models as judges for experimental validation, to bypass the cost and implications of running participatory design studies directly with users. Following a user-centred approach, this paper presents preliminary results from a workshop with prospective VLM users. Insights from this pilot workshop inform future studies aimed at contextualising trust metrics and strategies for participants' engagement to fit the case of user-VLM interaction.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > New York (0.04)
- Europe > Italy > Lombardy > Milan (0.04)
- (2 more...)
- Health & Medicine (0.68)
- Education (0.46)
Unlearning Imperative: Securing Trustworthy and Responsible LLMs through Engineered Forgetting
Kang, James Jin, Bui, Dang, Pham, Thanh, Ling, Huo-Chong
The growing use of large language models in sensitive domains has exposed a critical weakness: these systems lack reliable mechanisms to guarantee that private information can be permanently removed once it has been used. Retraining from the beginning is prohibitively costly, and existing unlearning methods remain fragmented, difficult to verify, and often vulnerable to recovery. This paper surveys recent research on machine unlearning for LLMs and considers how far current approaches can address these challenges. We review methods for evaluating whether forgetting has occurred, the resilience of unlearned models against adversarial attacks, and mechanisms that can support user trust when model complexity or proprietary limits restrict transparency. Technical solutions such as differential privacy, homomorphic encryption, federated learning, and ephemeral memory are examined alongside institutional safeguards including auditing practices and regulatory frameworks. The review finds steady progress, but robust and verifiable unlearning remains unresolved. Efficient techniques that avoid costly retraining, stronger defenses against adversarial recovery, and governance structures that reinforce accountability are needed if LLMs are to be deployed safely in sensitive applications. By integrating technical and organizational perspectives, this study outlines a pathway toward AI systems that can be required to forget, while maintaining both privacy and public trust.
- North America > United States > Florida > Miami-Dade County > Miami (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- North America > United States > California > Riverside County > Riverside (0.04)
- (3 more...)
- Research Report > New Finding (1.00)
- Overview (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- Government (1.00)
Preliminary Quantitative Study on Explainability and Trust in AI Systems
Large-scale AI models such as GPT-4 have accelerated the deployment of artificial intelligence across critical domains including law, healthcare, and finance, raising urgent questions about trust and transparency. This study investigates the relationship between explainability and user trust in AI systems through a quantitative experimental design. Using an interactive, web-based loan approval simulation, we compare how different types of explanations, ranging from basic feature importance to interactive counterfactuals, influence perceived trust. Results suggest that interactivity enhances both user engagement and confidence, and that the clarity and relevance of explanations are key determinants of trust. These findings contribute empirical evidence to the growing field of human-centered explainable AI, highlighting measurable effects of explainability design on user perception.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > Maryland > Prince George's County > College Park (0.04)
- North America > United States > Hawaii > Honolulu County > Honolulu (0.04)
- (2 more...)
Evaluating Node-tree Interfaces for AI Explainability
Wang, Lifei, Friedman, Natalie, Zhu, Chengchao, Zhu, Zeshu, Mountford, S. Joy
As large language models (LLMs) become ubiquitous in workplace tools and decision-making processes, ensuring explainability and fostering user trust are critical. Although advancements in LLM engineering continue, human-centered design is still catching up, particularly when it comes to embedding transparency and trust into AI interfaces. This study evaluates user experiences with two distinct AI interfaces, node-tree interfaces and chatbot interfaces, to assess their performance in exploratory, follow-up inquiry, decision-making, and problem-solving tasks. Our design-driven approach introduces a node-tree interface that visually structures AI-generated responses into hierarchically organized, interactive nodes, allowing users to navigate, refine, and follow up on complex information. In a comparative study with n=20 business users, we observed that while the chatbot interface effectively supports linear, step-by-step queries, it is the node-tree interface that enhances brainstorming. Quantitative and qualitative findings indicate that node-tree interfaces not only improve task performance and decision-making support but also promote higher levels of user trust by preserving context. Our findings suggest that adaptive AI interfaces capable of switching between structured visualizations and conversational formats based on task requirements can significantly enhance transparency and user confidence in AI-powered systems. This work contributes actionable insights to the fields of human-robot interaction and AI design, particularly for enterprise applications where trust-building is critical for teams.
- North America > United States > California > Santa Clara County > Palo Alto (0.05)
- Europe (0.05)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
Explainability of Algorithms
The opaqueness of many complex machine learning algorithms is often mentioned as one of the main obstacles to the ethical development of artificial intelligence (AI). But what does it mean for an algorithm to be opaque? Highly complex algorithms such as artificial neural networks process enormous volumes of data in parallel along multiple hidden layers of interconnected nodes, rendering their inner workings epistemically inaccessible to any human being, including their designers and developers; they are "black boxes" for all their stakeholders. But opaqueness is not always the inevitable result of technical complexity. Sometimes, the way an algorithm works is intentionally hidden from view for proprietary reasons, especially in commercial automated decision systems, creating an entirely different type of opaqueness. In the first part of the chapter, we will examine these two ways of understanding opacity and the ethical implications that stem from each of them. In the second part, we explore the different explanatory methods that have been developed in computer science to overcome an AI system's technical opaqueness. As the analysis shows, explainable AI (XAI) still faces numerous challenges.
- North America > United States > New York (0.05)
- South America > Colombia > Bogotá D.C. > Bogotá (0.04)
- North America > United States > Wisconsin (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- Government (0.93)
Is Trust Correlated With Explainability in AI? A Meta-Analysis
This study critically examines the commonly held assumption that explicability in artificial intelligence (AI) systems inherently boosts user trust. Utilizing a meta-analytical approach, we conducted a comprehensive examination of the existing literature to explore the relationship between AI explainability and trust. Our analysis, incorporating data from 90 studies, reveals a statistically significant but moderate positive correlation between the explainability of AI systems and the trust they engender among users. This indicates that while explainability contributes to building trust, it is not the sole or predominant factor in this equation. In addition to academic contributions to the field of Explainable AI (XAI), this research highlights its broader socio-technical implications, particularly in promoting accountability and fostering user trust in critical domains such as healthcare and justice. By addressing challenges like algorithmic bias and ethical transparency, the study underscores the need for equitable and sustainable AI adoption. Rather than focusing solely on immediate trust, we emphasize the normative importance of fostering authentic and enduring trustworthiness in AI systems.
- North America > United States > Illinois > Cook County > Chicago (0.04)
- North America > Canada > Ontario > Durham Region > Oshawa (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (5 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.94)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
Explainable AI Components for Narrative Map Extraction
Keith, Brian, German, Fausto, Krokos, Eric, Joseph, Sarah, North, Chris
As narrative extraction systems grow in complexity, establishing user trust through interpretable and explainable outputs becomes increasingly critical. This paper presents an evaluation of an Explainable Artificial Intelligence (XAI) system for narrative map extraction that provides meaningful explanations across multiple levels of abstraction. Our system integrates explanations based on topical clusters for low-level document relationships, connection explanations for event relationships, and high-level structure explanations for overall narrative patterns. In particular, we evaluate the XAI system through a user study with 10 participants who examined narratives from the 2021 Cuban protests. The analysis of results demonstrates that the explanations increased participants' trust in the system's decisions, with connection explanations and important event detection proving particularly effective at building user confidence. Survey responses indicate that the multi-level explanation approach helped users develop appropriate trust in the system's narrative extraction capabilities.
- North America > United States > New York > New York County > New York City (0.05)
- South America > Chile > Antofagasta Region > Antofagasta Province > Antofagasta (0.04)
- Oceania > Australia > New South Wales > Sydney (0.04)
- (8 more...)
- Research Report > New Finding (1.00)
- Questionnaire & Opinion Survey (1.00)
VizTrust: A Visual Analytics Tool for Capturing User Trust Dynamics in Human-AI Communication
Wang, Xin, Jesso, Stephanie Tulk, Kojaku, Sadamori, Neyens, David M, Kim, Min Sun
Trust plays a fundamental role in shaping the willingness of users to engage and collaborate with artificial intelligence (AI) systems. Yet, measuring user trust remains challenging due to its complex and dynamic nature. While traditional survey methods provide trust levels for long conversations, they fail to capture its dynamic evolution during ongoing interactions. Here, we present VizTrust, a real-time visual analytics tool that addresses this challenge by leveraging a multi-agent collaboration system to capture and analyze user trust dynamics in human-agent communication. Built on established human-computer trust scales (competence, integrity, benevolence, and predictability), VizTrust enables stakeholders to observe trust formation as it happens, identify patterns in trust development, and pinpoint specific interaction elements that influence trust. Our tool offers actionable insights into human-agent trust formation and evolution in real time through a dashboard, supporting the design of adaptive conversational agents that respond effectively to user trust signals.
- Asia > Japan > Honshū > Kantō > Kanagawa Prefecture > Yokohama (0.06)
- North America > United States > New York > Broome County > Binghamton (0.05)
- North America > United States > South Carolina (0.04)
- (4 more...)
- Research Report (1.00)
- Questionnaire & Opinion Survey (0.68)
Evaluating Human Trust in LLM-Based Planners: A Preliminary Study
Chen, Shenghui, Yang, Yunhao, Boggess, Kayla, Heo, Seongkook, Feng, Lu, Topcu, Ufuk
Large Language Models (LLMs) are increasingly used for planning tasks, offering unique capabilities not found in classical planners, such as generating explanations and iterative refinement. However, trust, a critical factor in the adoption of planning systems, remains underexplored in the context of LLM-based planning tasks. This study bridges this gap by comparing human trust in LLM-based planners with classical planners through a user study in a Planning Domain Definition Language (PDDL) domain. Combining subjective measures, such as trust questionnaires, with objective metrics like evaluation accuracy, we find that correctness is the primary driver of trust and performance. Explanations provided by the LLM improved evaluation accuracy but had limited impact on trust, while plan refinement showed potential for increasing trust without significantly enhancing evaluation accuracy.
- North America > United States > Texas > Travis County > Austin (0.05)
- North America > United States > Virginia (0.04)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- (5 more...)
- Questionnaire & Opinion Survey (1.00)
- Research Report > New Finding (0.88)
- Research Report > Experimental Study (0.68)